Recurrent Neural Networks to Approximate the Semantics of Acceptable Logic Programs

Authors

  • Steffen Hölldobler
  • Yvonne Kalinke
  • Hans-Peter Störr
Abstract

In this paper we show that a feedforward neural network with at least one hidden layer can approximate the meaning function TP of an acceptable logic program P. This follows from a property of acceptable logic programs: for this class of programs the meaning function TP is a contraction mapping on the complete metric space of the interpretations for P, as shown by Fitting in [3]. Using this result, it can be shown that for an acceptable program such a network can be extended to a recurrent neural network that approximates the iteration of the meaning function TP, that is, the semantics of the logic program P.
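The object being approximated can be made concrete with a small illustrative sketch (not the paper's network construction): the immediate consequence operator TP maps an interpretation to the set of clause heads whose bodies it satisfies, and iterating TP from the empty interpretation converges, for acceptable programs, to a unique fixed point. The clause encoding and function names below are assumptions for illustration only.

```python
# Illustrative sketch: the immediate consequence operator T_P of a
# propositional logic program, iterated to a fixed point. A clause is
# encoded as (head, positive_body_atoms, negative_body_atoms).

def tp(program, interpretation):
    """One application of T_P: heads of clauses whose body is true in I."""
    return {head
            for head, pos, neg in program
            if pos <= interpretation and not (neg & interpretation)}

def iterate_tp(program, start=frozenset(), max_steps=100):
    """Iterate T_P from `start`; for acceptable programs this converges
    (by the contraction property) to the unique fixed point, i.e. the
    semantics of the program."""
    current = set(start)
    for _ in range(max_steps):
        nxt = tp(program, current)
        if nxt == current:          # fixed point reached
            return current
        current = nxt
    return current

# Example program:  p.   q :- p.   r :- q, not s.
program = [("p", set(), set()),
           ("q", {"p"}, set()),
           ("r", {"q"}, {"s"})]
```

Here `iterate_tp(program)` reaches the fixed point {p, q, r} in three steps; the recurrent network of the paper approximates exactly this iteration.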


Similar References

Computation of Normal Logic Programs by Fibring Neural Networks

In this paper, we develop a theory of the integration of fibring neural networks (a generalization of conventional neural networks) into model-theoretic semantics for logic programming. We present some ideas and results about the approximate computation by fibring neural networks of semantic immediate consequence operators TP and TP, where TP denotes a generalization of TP relative to a many-v...


Logic programs and connectionist networks

One facet of the question of integration of Logic and Connectionist Systems, and how these can complement each other, concerns the points of contact, in terms of semantics, between neural networks and logic programs. In this paper, we show that certain semantic operators for propositional logic programs can be computed by feedforward connectionist networks, and that the same semantic operators ...
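The kind of construction this abstract refers to can be sketched with a minimal, assumed encoding (not the paper's exact network): a two-layer threshold network whose hidden units stand for clauses and whose output units stand for atoms, so that a single forward pass computes TP(I). The clause format and helper name are illustrative assumptions.

```python
# Hedged sketch: a two-layer threshold network computing T_P in one
# forward pass. Hidden units correspond to clauses (active iff the
# clause body is satisfied); output units correspond to atoms (active
# iff some clause with that head fired).

def make_network(program, atoms):
    atoms = sorted(atoms)
    idx = {a: i for i, a in enumerate(atoms)}

    def forward(bits):  # bits: 0/1 vector over `atoms`, i.e. an interpretation
        # hidden layer: one threshold unit per clause
        hidden = [all(bits[idx[a]] for a in pos) and
                  not any(bits[idx[a]] for a in neg)
                  for _, pos, neg in program]
        # output layer: one unit per atom, an OR over its clause units
        return [int(any(h for (head, _, _), h in zip(program, hidden)
                        if head == atom))
                for atom in atoms]

    return atoms, forward
```

For the program p. and q :- p. over atoms {p, q}, `forward([0, 0])` yields [1, 0] and `forward([1, 0])` yields [1, 1]; iterating the forward pass mirrors the iteration of TP.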


Approximating the Semantics of Logic Programs by Recurrent Neural Networks

Abstract. In [8] we have shown how to construct a 3-layer recurrent neural network that computes the iteration of the meaning function TP of a given propositional logic program, which corresponds to the computation of the semantics of the program. In this article we define a notion of approximation for interpretations and prove that there exists a 3-layer feedforward neural network that approx...


Robust stability of stochastic fuzzy impulsive recurrent neural networks with time-varying delays

In this paper, global robust stability of stochastic impulsive recurrent neural networks with time-varying delays which are represented by the Takagi-Sugeno (T-S) fuzzy models is considered. A novel Linear Matrix Inequality (LMI)-based stability criterion is obtained by using Lyapunov functional theory to guarantee the asymptotic stability of uncertain fuzzy stochastic impulsive recurrent neural...


Solving Linear Semi-Infinite Programming Problems Using Recurrent Neural Networks

Linear semi-infinite programming problem is an important class of optimization problems which deals with infinite constraints. In this paper, to solve this problem, we combine a discretization method and a neural network method. By a simple discretization of the infinite constraints, we convert the linear semi-infinite programming problem into a linear programming problem. Then, we use...



Journal:

Volume   Issue

Pages  -

Publication date: 1998